As machine learning (ML) systems get adopted in more critical areas, it has become increasingly crucial to address the bias that could occur in these systems. Several fairness pre-processing algorithms are available to alleviate implicit biases during model training. These algorithms employ different concepts of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy. In this work, we evaluate three popular fairness pre-processing algorithms and investigate the potential for combining all algorithms into a more robust pre-processing ensemble. We report on lessons learned that can help practitioners better select fairness algorithms for their models.
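Although this summary does not name the three algorithms, a classic example of a fairness pre-processing step is Kamiran and Calders' reweighing, which assigns each training instance the weight w(s, y) = P(s)·P(y) / P(s, y) so that the protected attribute and the label appear statistically independent. A minimal sketch (illustrative only, not the ensemble studied in this work):

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Instance weights w(s, y) = P(s) * P(y) / P(s, y), as in
    Kamiran & Calders' reweighing pre-processing scheme.
    Weights are 1 exactly when group and label are independent."""
    n = len(labels)
    p_s = Counter(groups)              # marginal counts of the group
    p_y = Counter(labels)              # marginal counts of the label
    p_sy = Counter(zip(groups, labels))  # joint counts
    return [
        (p_s[s] / n) * (p_y[y] / n) / (p_sy[(s, y)] / n)
        for s, y in zip(groups, labels)
    ]
```

Under-represented (group, label) combinations receive weights above 1, over-represented ones below 1, which a downstream learner can consume as sample weights.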
In this paper, a novel methodology is proposed that allows for the low-complexity development of neural network (NN) based equalizers to mitigate impairments in high-speed coherent optical transmission systems. In this work, we provide a comprehensive description and comparison of various deep model compression approaches that have been applied to feed-forward and recurrent NN designs. Additionally, we evaluate the influence of these strategies on the performance of each NN equalizer. Quantization, weight clustering, pruning, and other cutting-edge strategies for model compression are considered. In this work, we propose and evaluate a Bayesian-optimization-assisted compression, in which the hyperparameters of the compression are chosen to simultaneously reduce complexity and improve performance. In conclusion, the trade-off between the complexity and performance of each compression approach is evaluated using both simulated and experimental data to complete the analysis. By utilizing optimal compression approaches, we show that it is possible to design an NN-based equalizer that achieves better performance than the conventional digital back-propagation (DBP) equalizer with only one step. This is accomplished by reducing the number of multipliers used in the NN equalizer after applying the weight clustering and pruning algorithms. Furthermore, we demonstrate that an NN-based equalizer can also achieve superior performance while still maintaining the same degree of complexity as a full electronic chromatic dispersion compensation block. We conclude the analysis by highlighting open questions and existing challenges, as well as future research directions.
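Of the compression strategies listed, weight clustering is the easiest to sketch: a layer's weights are replaced by a small codebook of shared values found by k-means, so only cluster indices need to be stored. The function below is a toy 1-D version in plain Python (illustrative only; real NN compression operates on full weight tensors):

```python
def cluster_weights(weights, centroids, iters=20):
    """1-D k-means (Lloyd's algorithm): snap each weight to its nearest
    centroid so the layer stores a small codebook plus cluster indices."""
    cents = list(centroids)
    for _ in range(iters):
        # assignment step: nearest centroid for each weight
        assign = [min(range(len(cents)), key=lambda k: abs(w - cents[k]))
                  for w in weights]
        # update step: each centroid becomes the mean of its members
        for k in range(len(cents)):
            members = [w for w, a in zip(weights, assign) if a == k]
            if members:
                cents[k] = sum(members) / len(members)
    return [cents[a] for a in assign], cents
```

After clustering, multiplications can be shared per codebook entry, which is the mechanism behind the multiplier-count reductions described above.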
This work compares supervised machine learning methods, using reliable data from constructive simulations, to estimate the most effective moment for launching missiles during air combat engagements. We employed resampling techniques to improve the predictive models, analyzing accuracy, precision, recall, and F1-score. Indeed, we could identify a remarkable performance of the models based on decision trees, as well as a significant sensitivity of the other algorithms to the resampling techniques. The models with the best F1-scores obtained values of 0.379 without and 0.465 with the resampling technique, an increase of 22.69%. Thus, if desirable, resampling techniques can improve a model's recall and F1-score at a slight cost in accuracy and precision. Therefore, through data obtained from constructive simulations, it is possible to develop decision-support tools based on machine learning models, which may improve the flight quality in BVR air combat, increasing the effectiveness of offensive missions to hit particular targets.
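The summary does not say which resampling techniques were applied; random oversampling of the minority class is one of the simplest and illustrates the idea. A minimal sketch (the function name and the balance-to-the-majority policy are assumptions for illustration):

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate randomly chosen minority-class samples until every
    class has as many samples as the majority class."""
    rng = random.Random(seed)
    by_cls = {}
    for xi, yi in zip(X, y):
        by_cls.setdefault(yi, []).append(xi)
    n_max = max(len(v) for v in by_cls.values())
    Xr, yr = [], []
    for cls, items in by_cls.items():
        extra = [rng.choice(items) for _ in range(n_max - len(items))]
        for xi in items + extra:
            Xr.append(xi)
            yr.append(cls)
    return Xr, yr
```

Balancing the classes this way typically raises recall on the rare class (here, the "launch now" moments), at the cost of the slight accuracy/precision drop noted above.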
This work investigates the use of deep neural networks (DNNs) to perform estimates of the Weapon Engagement Zone (WEZ) maximum launch range. The WEZ allows the pilot to identify the airspace in which the available missile has a higher probability of successfully engaging a particular target, i.e., a hypothetical area surrounding an aircraft in which an adversary is vulnerable to a shot. We propose an approach to determine the WEZ of a given missile using 50,000 simulated launches under varying conditions. These simulations are used to train a DNN that can predict the WEZ when the aircraft finds itself in different firing conditions, with a coefficient of determination of 0.99. It provides an alternative procedure with respect to previous research, since it employs a non-discretized model, i.e., it considers all directions of the WEZ at once, which had not been done previously. Additionally, the proposed method uses an experimental design that allows for fewer simulation runs, providing faster model training.
This work aims to provide an engagement decision-support tool for beyond-visual-range (BVR) air combat in the context of defensive counter air (DCA) missions. In BVR air combat, the engagement decision refers to the choice of the moment at which the pilot assumes an offensive stance and executes the corresponding maneuvers. To model this decision, we use the Brazilian Air Force's aerospace simulation environment (Ambiente de Simulação Aeroespacial, ASA, in Portuguese), which generated 3,729 constructive simulations, each lasting 12 minutes, for a total of 10,316 engagements. We analyzed all samples with an operational metric called the DCA index, which, based on the experience of subject-matter experts, represents the degree of success in missions of this kind. This metric considers the distances of the aircraft of the same team and of the opposing team, the point of the combat air patrol, and the number of missiles employed. By defining the engagement status just before the engagement begins and the average of the DCA index throughout the engagement, we created a supervised learning model to determine the quality of a new engagement. An algorithm based on decision trees, used with the XGBoost library, provides a regression model to predict the DCA index with a coefficient of determination close to 0.8 and a root mean square error of 0.05, which can give BVR pilots a parameter to decide whether or not to engage. Thus, using data obtained through simulation, this work contributes by building a machine-learning-based decision-support system for BVR air combat.
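The two figures of merit quoted for the regressor, the coefficient of determination and the root mean square error, can be computed directly from predictions; a minimal sketch:

```python
import math

def rmse_and_r2(y_true, y_pred):
    """Root mean square error and coefficient of determination (R^2),
    the two regression metrics reported for the DCA-index model."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    rmse = math.sqrt(ss_res / n)
    r2 = 1.0 - ss_res / ss_tot  # 1.0 means a perfect fit
    return rmse, r2
```

An R² near 0.8 with an RMSE of 0.05 on a [0, 1] index means the model explains most of the variance while typically erring by about five hundredths of the index.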
We present a method for solving two minimal problems for relative camera pose estimation from three views, which are based on three view correspondences of i) three points and one line and the novel case of ii) three points and two lines through two of the points. These problems are too difficult to be efficiently solved by state-of-the-art Groebner basis methods. Our method is based on a new efficient homotopy continuation (HC) solver framework MINUS, which dramatically speeds up previous HC solving by specializing HC methods to generic cases of our problems. We characterize their number of solutions and show with simulated experiments that our solvers are numerically robust and stable under image noise, a key contribution given the borderline intractable degree of nonlinearity of trinocular constraints. We show in real experiments that i) SIFT feature location and orientation provide good enough point-and-line correspondences for three-view reconstruction and ii) that we can solve difficult cases with too few or too noisy tentative matches, where state-of-the-art structure from motion initialization fails.
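The core idea behind homotopy continuation solvers such as MINUS can be illustrated on a single polynomial: deform an easy start system g with a known root into the target system f via H(x, t) = (1 − t)·g(x) + t·f(x), and track the root as t goes from 0 to 1 with Newton corrections at each step. A univariate toy sketch (the real solver tracks many paths of multivariate trinocular systems):

```python
def homotopy_solve(f, df, g, dg, x0, steps=100, newton_iters=5):
    """Track a root of H(x, t) = (1 - t) * g(x) + t * f(x) from a known
    root x0 of the start system g (t = 0) to a root of the target
    system f (t = 1), correcting with Newton iterations at each t."""
    x = x0
    for s in range(1, steps + 1):
        t = s / steps
        for _ in range(newton_iters):
            h = (1 - t) * g(x) + t * f(x)
            dh = (1 - t) * dg(x) + t * df(x)
            x -= h / dh  # Newton correction back onto the path
    return x
```

Because each step only nudges t slightly, the previous root is an excellent Newton starting point, which is what makes path tracking numerically stable.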
Numerous works use word embedding-based metrics to quantify societal biases and stereotypes in texts. Recent studies have found that word embeddings can capture semantic similarity but may be affected by word frequency. In this work we study the effect of frequency when measuring female vs. male gender bias with word embedding-based bias quantification methods. We find that Skip-gram with negative sampling and GloVe tend to detect male bias in high frequency words, while GloVe tends to return female bias in low frequency words. We show these behaviors still exist when words are randomly shuffled. This proves that the frequency-based effect observed in unshuffled corpora stems from properties of the metric rather than from word associations. The effect is spurious and problematic since bias metrics should depend exclusively on word co-occurrences and not individual word frequencies. Finally, we compare these results with the ones obtained with an alternative metric based on Pointwise Mutual Information. We find that this metric does not show a clear dependence on frequency, even though it is slightly skewed towards male bias across all frequencies.
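The PMI-based metric mentioned above can be written down directly from co-occurrence counts; a minimal sketch using "she"/"he" as the gender context words (the exact context word lists used in the study may differ):

```python
import math

def pmi_bias(word, cooc, counts, total):
    """Gender bias as bias(w) = PMI(w, 'she') - PMI(w, 'he'), computed
    from raw co-occurrence counts. Positive values indicate a
    female-skewed word, negative values a male-skewed one."""
    def pmi(a, b):
        # PMI(a, b) = log( P(a, b) / (P(a) * P(b)) )
        return math.log((cooc[(a, b)] * total) / (counts[a] * counts[b]))
    return pmi(word, "she") - pmi(word, "he")
```

Note that the single-word counts cancel between the two PMI terms, which is one reason a PMI-based metric is less sensitive to raw word frequency than embedding-based scores.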
We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models. When executing SparseGPT on the largest available open-source models, OPT-175B and BLOOM-176B, we can reach 60% sparsity with negligible increase in perplexity: remarkably, more than 100 billion weights from these models can be ignored at inference time. SparseGPT generalizes to semi-structured (2:4 and 4:8) patterns, and is compatible with weight quantization approaches.
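The 2:4 semi-structured pattern means that in every group of four consecutive weights at most two are nonzero, which current GPUs can exploit for speedups. The sketch below enforces the pattern with plain magnitude pruning for illustration; SparseGPT itself chooses which weights to drop with an approximate second-order criterion and updates the remaining weights, not by magnitude alone:

```python
def prune_2_of_4(weights):
    """Enforce 2:4 semi-structured sparsity: in every group of four
    consecutive weights, zero out the two with the smallest magnitude
    (a magnitude-pruning stand-in for SparseGPT's selection rule)."""
    out = list(weights)
    for i in range(0, len(out) - len(out) % 4, 4):
        group = sorted(range(i, i + 4), key=lambda j: abs(out[j]))
        for j in group[:2]:  # two smallest-magnitude entries per group
            out[j] = 0.0
    return out
```

The result is exactly 50% sparse with a hardware-friendly layout, which is why the 2:4 pattern matters beyond unstructured sparsity.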
This report summarizes the work carried out by the authors during the Twelfth Montreal Industrial Problem Solving Workshop, held at Université de Montréal in August 2022. The team tackled a problem submitted by CBC/Radio-Canada on the theme of Automatic Text Simplification (ATS).
Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting our development of a machine learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work represents an improvement over previous efforts, which either focused on the matrix-multiplication mode of BrainScaleS-2 or lacked full automation. Our framework, called hxtorch.snn, enables the hardware-in-the-loop training of spiking neural networks within PyTorch, including support for automatic differentiation in a fully-automated hardware experiment workflow. In addition, hxtorch.snn facilitates seamless transitions between emulating on hardware and simulating in software. We demonstrate the capabilities of hxtorch.snn on a classification task using the Yin-Yang dataset, employing a gradient-based approach with surrogate gradients and densely sampled membrane observations from the BrainScaleS-2 hardware system.
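The surrogate-gradient approach mentioned above keeps the non-differentiable spike in the forward pass but substitutes a smooth derivative in the backward pass. A minimal sketch of one common choice, a SuperSpike-style fast-sigmoid surrogate (the specific surrogate shape used with hxtorch.snn is an assumption here, for illustration only):

```python
def spike_forward(v, threshold=1.0):
    """Forward pass: a non-differentiable Heaviside step on the
    membrane potential -- the neuron either spikes or it does not."""
    return 1.0 if v >= threshold else 0.0

def spike_surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass stand-in: SuperSpike-style surrogate derivative
    1 / (1 + beta * |v - threshold|)^2, used in place of the step's
    zero-almost-everywhere true gradient."""
    return 1.0 / (1.0 + beta * abs(v - threshold)) ** 2
```

In a PyTorch setting these two functions would live in the forward and backward of a custom autograd function, letting gradients flow through spiking layers during hardware-in-the-loop training.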